Most Resource Efficient Matrix Vector Multiplication on FPGAs

Authors

Abstract

Fast and resource-efficient inference in artificial neural networks (ANNs) is of utmost importance and drives many new developments in the area of hardware architectures, e.g., by means of systolic arrays or algorithmic optimizations such as pruning. In this paper, we present a novel method for lowering the computation effort of ANN inference utilizing ideas from information theory. Weight matrices are sliced into submatrices of logarithmic aspect ratios. These slices are then factorized. This reduces the number of required computations without compromising on fully parallel processing. We create a hardware architecture dedicated to this purpose. We also provide a tool to map these factorized matrices efficiently to reconfigurable hardware. By comparing with state-of-the-art FPGA implementations, we can prove our claim of a reduction in resources, measured in look-up tables (LUTs), by a factor of three to six. Our method does not rely on any particular property of the weight matrices of the ANN. It works for the general task of multiplying an input vector with a constant matrix and is thus suitable for digital signal processing beyond ANNs.
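The slicing step described in the abstract can be illustrated with a minimal sketch: the matrix-vector product is computed slice by slice and the partial products are accumulated. This is only the general slice-and-accumulate pattern, not the paper's method; the actual contribution additionally factorizes each slice, and the function name and `slice_width` parameter here are hypothetical.

```python
import numpy as np

def sliced_matvec(W, x, slice_width):
    """Compute y = W @ x by splitting W into column slices and
    accumulating the partial products. Illustrative only: the
    paper further factorizes each slice to cut the operation
    count; this sketch just shows that slicing preserves the
    exact result."""
    m, n = W.shape
    y = np.zeros(m)
    for start in range(0, n, slice_width):
        stop = min(start + slice_width, n)
        # Partial product of one column slice with the matching
        # segment of the input vector.
        y += W[:, start:stop] @ x[start:stop]
    return y

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))
x = rng.standard_normal(32)
assert np.allclose(sliced_matvec(W, x, 4), W @ x)
```

Because each slice can be processed independently, the pattern maps naturally onto fully parallel hardware, which is the property the abstract emphasizes.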



Similar resources

Sparse Matrix-Vector Multiplication on FPGAs

Floating-point Sparse Matrix-Vector Multiplication (SpMXV) is a key computational kernel in scientific and engineering applications. The poor data locality of sparse matrices significantly reduces the performance of SpMXV on general-purpose processors, which rely heavily on the cache hierarchy to achieve high performance. The abundant hardware resources on current FPGAs provide new opportunities to...
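The poor data locality mentioned above comes from the indirect indexing that compressed sparse formats require: each nonzero value carries a column index used to gather from the input vector. A minimal sketch of SpMV over the standard CSR (compressed sparse row) layout, not taken from the cited work:

```python
import numpy as np

def csr_spmv(data, indices, indptr, x):
    """Sparse matrix-vector product y = A @ x with A in CSR format.
    data:    nonzero values in row-major order
    indices: column index of each nonzero value
    indptr:  offsets into data/indices; row i spans
             data[indptr[i]:indptr[i+1]]"""
    rows = len(indptr) - 1
    y = np.zeros(rows)
    for i in range(rows):
        for k in range(indptr[i], indptr[i + 1]):
            # Indirect gather x[indices[k]] is the cache-unfriendly step.
            y[i] += data[k] * x[indices[k]]
    return y

# A = [[4, 0, 1],
#      [0, 0, 2],
#      [3, 0, 0]]
data = np.array([4.0, 1.0, 2.0, 3.0])
indices = np.array([0, 2, 2, 0])
indptr = np.array([0, 2, 3, 4])
x = np.array([1.0, 1.0, 1.0])
print(csr_spmv(data, indices, indptr, x))  # [5. 2. 3.]
```

On an FPGA, the gather can be served by on-chip memories with predictable latency, which is why such kernels are attractive targets for reconfigurable hardware.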


Reconfigurable Sparse Matrix-Vector Multiplication on FPGAs

executing memory-intensive simulations, such as those required for sparse matrix-vector multiplication. This effect is due to the memory bottleneck that is encountered with large arrays that must be stored in dynamic RAM. An FPGA core designed for a target performance that does not unnecessarily exceed the memory-imposed bottleneck can be distributed, along with multiple memory interfaces, into...


Mapping Sparse Matrix-Vector Multiplication on FPGAs

Higher peak performance on Field Programmable Gate Arrays (FPGAs) than on microprocessors was shown for sparse matrix-vector multiplication (SpMxV) accelerator designs. However, due to the frequent memory movement in SpMxV, system performance is heavily affected by memory bandwidth and overheads in real applications. In this paper, we introduce an innovative SpMxV Solver, designed for FPGAs, SSF...


Efficient Sparse Matrix-Vector Multiplication on CUDA

The massive parallelism of graphics processing units (GPUs) offers tremendous performance in many high-performance computing applications. While dense linear algebra readily maps to such platforms, harnessing this potential for sparse matrix computations presents additional challenges. Given its role in iterative methods for solving sparse linear systems and eigenvalue problems, sparse matrix-v...



Journal

Journal title: IEEE Access

Year: 2023

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2023.3234622